# Multilingual instruction fine-tuning
## Mistral Small 3.2 24B Instruct 2506 Bf16
mlx-community · Apache-2.0 · Downloads: 163 · Likes: 1
An MLX-format model converted from Mistral-Small-3.2-24B-Instruct-2506, suitable for instruction-following tasks.
Tags: Large Language Model, Supports Multiple Languages

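For MLX conversions like this one, inference typically goes through the mlx-lm package rather than Transformers. Below is a minimal sketch, assuming Apple silicon with mlx-lm installed and a repository id of mlx-community/Mistral-Small-3.2-24B-Instruct-2506-bf16 (inferred from the listing, so verify it against the actual repo); the prompt is only an illustration.

```python
# Minimal MLX inference sketch (assumes `pip install mlx-lm` on Apple silicon).
# The repo id below is inferred from the listing and may not match exactly.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Mistral-Small-3.2-24B-Instruct-2506-bf16")

messages = [{"role": "user", "content": "Summarize the idea of instruction fine-tuning in two sentences."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# generate() accepts the templated prompt and returns the decoded completion.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256)
print(text)
```
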
## Samastam It V1
hathibelagal · Downloads: 188 · Likes: 1
Samastam is an early instruction-tuned variant of the Sarvam-1 model, fine-tuned on the Alpaca-cleaned dataset to support multilingual instruction responses.
Tags: Large Language Model, Transformers

## Mistral Small 24B Instruct 2501 GGUF
bartowski · Apache-2.0 · Downloads: 48.61k · Likes: 111
Mistral-Small-24B-Instruct-2501 is a 24B-parameter instruction-fine-tuned large language model supporting multilingual text generation tasks.
Tags: Large Language Model, Supports Multiple Languages

## Llama 3.3 70B Instruct Abliterated GGUF
bartowski · Downloads: 7,786 · Likes: 8
A 70B-parameter large language model based on the Llama 3.3 architecture, supporting multilingual text generation and quantized for a range of hardware environments.
Tags: Large Language Model, Supports Multiple Languages

## Llama 4 Scout 17B 16E Instruct Bnb 4bit
bnb-community · Other · Downloads: 1,286 · Likes: 1
A 4-bit (int4) quantized version of meta-llama/Llama-4-Scout-17B-16E-Instruct, suitable for multilingual tasks.
Tags: Large Language Model, Transformers, Supports Multiple Languages

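The "Bnb 4bit" suffix refers to bitsandbytes int4 quantization. A pre-quantized checkpoint like this ships its quantization config and can normally be loaded straight from from_pretrained(); the sketch below shows the underlying technique, quantizing an ordinary causal LM to 4-bit NF4 at load time with BitsAndBytesConfig. The model id is a placeholder, and a CUDA GPU with bitsandbytes and accelerate installed is assumed.

```python
# Sketch of on-the-fly 4-bit (NF4) quantization with bitsandbytes + Transformers.
# Requires a CUDA GPU and `pip install transformers accelerate bitsandbytes`.
# The model id is a placeholder; pre-quantized repos such as the one above can
# usually be loaded directly with from_pretrained(), no extra config needed.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Llama-3.1-8B-Instruct"  # placeholder causal LM

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # NormalFloat4 weight format
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

inputs = tokenizer("Instruction tuning in one sentence:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
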
## Llama 4 Maverick 17B 16E Instruct 4bit
mlx-community · Other · Downloads: 538 · Likes: 6
A 4-bit quantized model converted from meta-llama/Llama-4-Maverick-17B-128E-Instruct, supporting multilingual text generation tasks.
Tags: Large Language Model, Supports Multiple Languages

## Mistral Small 3.1 24B Instruct 2503 Q5 K M GGUF
Triangle104 · Apache-2.0 · Downloads: 57 · Likes: 1
A 24B-parameter instruction-tuned model based on Mistral Small 3.1, supporting multilingual text and visual understanding, suitable for local deployment and efficient inference.
Tags: Image-to-Text, Supports Multiple Languages

## Llama 3.1 70B Instruct GGUF
Mungert · Downloads: 19.52k · Likes: 3
An ultra-low-bit (1-2 bit) quantized model based on Llama-3.1-70B, using IQ-DynamicGate adaptive-precision quantization to improve accuracy while preserving memory efficiency.
Tags: Large Language Model, Supports Multiple Languages

## Gams 9B Instruct
cjvt · Downloads: 1,652 · Likes: 2
GaMS-9B-Instruct is a Slovenian text-generation model based on Google's Gemma 2 series, supporting Slovenian and English, with partial support for Croatian, Serbian, and Bosnian.
Tags: Large Language Model, Safetensors, Supports Multiple Languages

## QWQ Stock
wanlige · Downloads: 368 · Likes: 7
A merge of several 32B-parameter Qwen-series models built with the Model Stock merging method, aimed at improved multilingual processing.
Tags: Large Language Model, Transformers

## Salamandra 2b Instruct GGUF
tensorblock · Apache-2.0 · Downloads: 120 · Likes: 1
A GGUF-format 2B-parameter multilingual instruction-fine-tuned model supporting 30+ languages, suitable for text generation tasks.
Tags: Large Language Model, Transformers

## EXAONE 3.5 32B Instruct Llamafied
beomi · Other · Downloads: 483 · Likes: 5
A "llamafied" (Llama-architecture) conversion of LG AI Research's EXAONE-3.5-32B-Instruct, a large language model supporting English and Korean.
Tags: Large Language Model, Transformers, Supports Multiple Languages

## Granite 3.0 3b A800m Instruct
ibm-granite · Apache-2.0 · Downloads: 5,240 · Likes: 18
An instruction-tuned language model developed by IBM on the Granite 3.0 architecture, with 3B total and roughly 800M active parameters, supporting multilingual tasks and commercial use.
Tags: Large Language Model, Transformers

## Llama 3.2 3B Instruct Q8 0 GGUF
hugging-quants · Downloads: 26.89k · Likes: 46
Meta's 3-billion-parameter instruction-fine-tuned model from the Llama 3.2 series, supporting multilingual text generation; this repository provides a Q8_0 GGUF quantization.
Tags: Large Language Model, Supports Multiple Languages

## Llama 3.2 1B Instruct Q8 0 GGUF
hugging-quants · Downloads: 64.04k · Likes: 31
Meta's 1-billion-parameter instruction-tuned model from the Llama 3.2 series, converted to GGUF format (Q8_0) for use with llama.cpp.
Tags: Large Language Model, Supports Multiple Languages

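Since this repo targets llama.cpp, one convenient route from Python is the llama-cpp-python bindings, which can fetch a GGUF file directly from the Hub. The repo id and file-name glob below are inferred from the listing, so check the repository for the exact file name.

```python
# Sketch: running a Q8_0 GGUF quant with llama-cpp-python
# (`pip install llama-cpp-python huggingface_hub`).
# Repo id and file-name glob are inferred from the listing; verify against the repo.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="hugging-quants/Llama-3.2-1B-Instruct-Q8_0-GGUF",
    filename="*q8_0.gguf",   # glob matching the single quant file in the repo
    n_ctx=4096,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give a greeting in three different languages."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```
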
## Mistral Nemo Instruct 2407
mistralai · Apache-2.0 · Downloads: 149.79k · Likes: 1,519
Mistral-Nemo-Instruct-2407 is an instruction-fine-tuned version of Mistral-Nemo-Base-2407, trained jointly by Mistral AI and NVIDIA; it outperforms existing models of similar or smaller size.
Tags: Large Language Model, Transformers, Supports Multiple Languages

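For an unquantized instruction model like this one, the standard Transformers chat-template path applies. A minimal sketch follows; the repo id matches the listing, the low sampling temperature follows the model card's guidance, and a GPU large enough for the bf16 weights is assumed.

```python
# Sketch: standard Transformers chat-template generation for Mistral-Nemo-Instruct-2407.
# Assumes `pip install transformers accelerate` and a GPU with enough memory for bf16 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-Nemo-Instruct-2407"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Answer in German: what is instruction fine-tuning?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=200, do_sample=True, temperature=0.3)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```
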
## Aya 23 35B
CohereLabs · Downloads: 3,721 · Likes: 282
Aya 23 is an open-weights, instruction-fine-tuned research model with highly advanced multilingual capabilities, supporting 23 languages.
Tags: Large Language Model, Transformers, Supports Multiple Languages

## Llama 3 Wissenschaft 8B
nbeerbower · Other · Downloads: 15 · Likes: 4
A multilingual merged model based on Llama-3-8B, combining German, Italian, and English capabilities.
Tags: Large Language Model, Transformers

## Indic Gemma 2b Finetuned Sft Navarasa 2.0
Telugu-LLM-Labs · Other · Downloads: 166 · Likes: 24
A multilingual instruction model fine-tuned from Gemma-2b, supporting 15 Indian languages plus English.
Tags: Large Language Model, Transformers, Supports Multiple Languages

## Bigtranslate
James-WYang · Downloads: 156 · Likes: 50
BigTranslate is a multilingual translation model built on LLaMA-13B, supporting translation across more than 100 languages.
Tags: Machine Translation, Transformers

## Falcon 7B Instruct GPTQ
TheBloke · Apache-2.0 · Downloads: 189 · Likes: 67
A 4-bit GPTQ quantization of Falcon-7B-Instruct, produced with the AutoGPTQ tool and suited to efficient inference in resource-constrained environments.
Tags: Large Language Model, Transformers, English

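GPTQ checkpoints are generally loaded through Transformers with a GPTQ backend installed (optimum plus auto-gptq or gptqmodel); the quantization settings are read from the repository. The sketch below assumes the repo id TheBloke/falcon-7b-instruct-GPTQ and a CUDA GPU; older repos that predate the Transformers GPTQ integration may instead need AutoGPTQ's own loader, as described on the model card.

```python
# Sketch: loading a GPTQ-quantized checkpoint through Transformers.
# Requires a CUDA GPU and a GPTQ backend, e.g. `pip install optimum auto-gptq`.
# The repo id is inferred from the listing; some older repos need
# trust_remote_code=True or auto_gptq.AutoGPTQForCausalLM instead.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheBloke/falcon-7b-instruct-GPTQ"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Write one sentence about low-bit inference.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
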
## Flan T5 Xxl Sharded Fp16
philschmid · Apache-2.0 · Downloads: 531 · Likes: 54
FLAN-T5 XXL is a variant of Google's T5, fine-tuned on more than 1,000 additional tasks; it supports multiple languages and outperforms the original T5. This repository provides a sharded fp16 export.
Tags: Large Language Model, Transformers

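The point of a sharded fp16 export is that the roughly 11B-parameter checkpoint loads in small pieces, keeping peak memory manageable on modest hardware. A minimal sketch, assuming the repo id philschmid/flan-t5-xxl-sharded-fp16 and accelerate installed for device placement:

```python
# Sketch: loading the sharded fp16 FLAN-T5 XXL export with Transformers.
# Sharded weight files keep peak RAM low while loading; `accelerate` handles placement.
# The repo id is inferred from the listing.
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "philschmid/flan-t5-xxl-sharded-fp16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer(
    "Translate to German: Instruction tuning improves zero-shot behaviour.",
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
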